
Ear & Hearing

Ovid Technologies (Wolters Kluwer Health)

All preprints, ranked by how well they match Ear & Hearing's content profile, based on 15 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.

1
AI-enhanced behavioral approach to measuring hearing in infants and toddlers: Proof-of-Concept Study

Schlittenlacher, J.; Blankenship, C.; Jackson, I.; Visram, A.; Munro, K.; Hunter, L.; Moore, D. R.

2025-07-11 pediatrics 10.1101/2025.07.10.25331271 medRxiv
Top 0.1%
55.2%

Objective: Show that a basic unsupervised machine learning (ML) algorithm can give information on the direction of child and infant reactions to sound using non-identifiable video-recorded facial features. Design: Infants and toddlers were presented warble tones or single-syllable utterances 45 degrees to the left or right. A camera recorded their reactions, from which features such as head turns and eye gaze were extracted with OpenFace. Three clusters were formed using Expectation Maximization on 80% of the toddler data. The remaining 20% and all infant data were used to verify whether the clusters represent groups for sound presentations to the left, to the right, and in both directions. Study Sample: 28 infants (2-5 months) and 30 toddlers (2-4 years), born preterm (<32 weeks gestational age), were presented ten sounds each. Results: The largest cluster comprised 90% of the trials with sound presentations in both directions, indicating "no decision." The remaining two clusters could be interpreted as representing reactions to the left and the right, respectively, with average sensitivities of 96% for the toddlers and 68% for the infants. Conclusions: A simple machine learning algorithm can form correct decisions on the direction of sound presentation using non-identifiable facial behavioural data.
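The clustering step described above (Expectation Maximization fitted to 80% of the data, three clusters verified on the held-out 20%) can be sketched with a Gaussian mixture model. Everything here is an illustrative assumption, not the study's data or feature set: the two stand-in features and the synthetic cluster centers are invented for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Illustrative stand-ins for OpenFace-derived per-trial features
# (e.g., mean head yaw and horizontal gaze): left-turn trials around
# negative values, right-turn around positive, "no decision" near zero.
left = rng.normal([-1.0, -1.0], 0.2, size=(100, 2))
right = rng.normal([1.0, 1.0], 0.2, size=(100, 2))
neither = rng.normal([0.0, 0.0], 0.2, size=(100, 2))
X = np.vstack([left, right, neither])

# 80/20 split, mirroring the study's use of 80% of the data for fitting
X_train, X_test = train_test_split(X, train_size=0.8, random_state=0)

# Three clusters fitted by Expectation Maximization, then applied
# to the held-out trials to check what each cluster represents
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_train)
test_labels = gmm.predict(X_test)
```

With well-separated synthetic clusters the held-out trials fall into all three components; interpreting which cluster means "left", "right", or "no decision" is then done by inspecting the fitted means, as the study does with its verification data.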

2
Peripheral neural synchrony in post-lingually deafened adult cochlear implant users

Skidmore, J.; Bruce, I. C.; Yuan, Y.; He, S.

2023-07-08 otolaryngology 10.1101/2023.07.07.23292369 medRxiv
Top 0.1%
44.6%

Objective: This paper reports a noninvasive method for quantifying neural synchrony in the cochlear nerve (i.e., peripheral neural synchrony) in cochlear implant (CI) users, which allows this physiological phenomenon to be evaluated in human CI users for the first time in the literature. In addition, this study assessed how peripheral neural synchrony correlated with temporal resolution acuity and speech perception outcomes measured in quiet and in noise in post-lingually deafened adult CI users. It tested the hypothesis that peripheral neural synchrony is an important factor for temporal resolution acuity and speech perception outcomes in noise in post-lingually deafened adult CI users. Design: Study participants included 24 post-lingually deafened adult CI users with a Cochlear Nucleus(R) device. Three study participants were implanted bilaterally, and each ear was tested separately. For each of the 27 implanted ears tested in this study, 400 sweeps of the electrically evoked compound action potential (eCAP) were measured at four electrode locations across the electrode array. Peripheral neural synchrony was quantified at each electrode location using the phase locking value (PLV), a measure of trial-by-trial phase coherence among eCAP sweeps/trials. Temporal resolution acuity was evaluated by measuring the within-channel gap detection threshold (GDT) using a three-alternative, forced-choice procedure in a subgroup of 20 participants (23 implanted ears). For each ear tested in these participants, GDTs were measured at two electrode locations with a large difference in PLVs. For 26 implanted ears tested in 23 participants, speech perception performance was evaluated using Consonant-Nucleus-Consonant (CNC) word lists presented in quiet and in noise at signal-to-noise ratios (SNRs) of +10 and +5 dB. Linear mixed-effects models were used to evaluate the effect of electrode location on the PLV and the effect of the PLV on GDT after controlling for stimulation level effects. Pearson product-moment correlation tests were used to assess the correlations between PLVs, CNC word scores measured in different conditions, and the degree of noise effect on CNC word scores. Results: There was a significant effect of electrode location on the PLV after controlling for the effect of stimulation level. There was a significant effect of the PLV on GDT after controlling for the effects of stimulation level, where higher PLVs (greater synchrony) led to lower GDTs (better temporal resolution acuity). PLVs were not significantly correlated with CNC word scores measured in any listening condition or with the effect of competing background noise presented at an SNR of +10 dB on CNC word scores. In contrast, there was a significant negative correlation between the PLV and the degree of noise effect on CNC word scores for a competing background noise presented at an SNR of +5 dB, where higher PLVs (greater synchrony) correlated with smaller noise effects on CNC word scores. Conclusions: This newly developed method can be used to assess peripheral neural synchrony in CI users, a physiological phenomenon that has not been systematically evaluated in electrical hearing. Poorer peripheral neural synchrony leads to lower temporal resolution acuity and is correlated with a larger detrimental effect of competing background noise presented at an SNR of +5 dB on speech perception performance in post-lingually deafened adult CI users.
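The phase locking value mentioned above has a standard general form: the magnitude of the mean unit phasor of per-trial phases, ranging from 0 (random phase across sweeps) to 1 (perfect coherence). A minimal numpy sketch, assuming phases are taken from each sweep's FFT; the paper's exact time-frequency decomposition and averaging choices may differ:

```python
import numpy as np

def plv(sweeps):
    """Phase locking value per frequency bin across sweeps.

    sweeps: (n_sweeps, n_samples) array. Each sweep's phase spectrum is
    taken from its FFT; the PLV in each bin is the magnitude of the mean
    unit phasor across sweeps.
    """
    phases = np.angle(np.fft.rfft(sweeps, axis=1))
    return np.abs(np.exp(1j * phases).mean(axis=0))

rng = np.random.default_rng(0)
template = rng.standard_normal(64)

# Identical sweeps: phases agree in every bin, so the PLV is 1 everywhere
plv_coherent = plv(np.tile(template, (100, 1)))

# Independent noise sweeps: phases are random, so the PLV is near 0
plv_noise = plv(rng.standard_normal((500, 64)))
```

For independent sweeps the PLV shrinks toward zero roughly as one over the square root of the number of sweeps, which is why the study's 400 sweeps per electrode give a stable estimate.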

3
The relationships between cochlear nerve health and AzBio sentence scores in quiet and noise in postlingually deafened adult cochlear implant users

Gao, Z.; Yuan, Y.; Oleson, J. J.; Mueller, C. R.; Bruce, I. C.; Gifford, R. H.; He, S.

2024-11-18 otolaryngology 10.1101/2024.11.16.24317332 medRxiv
Top 0.1%
44.1%

Objectives: This study investigated the relationships between cochlear nerve (CN) health and sentence-level speech perception outcomes measured in quiet and noise in postlingually deafened adult cochlear implant (CI) users. Design: Study participants included 28 postlingually deafened adult CI users with a Cochlear(R) Nucleus device. For each participant, only one ear was tested. Neural health of the CN was assessed at three or four electrode locations across the electrode array using two parameters derived from results of the electrically evoked compound action potential (eCAP). One parameter was the phase locking value (PLV), which estimated neural synchrony in the CN. The other was the sensitivity of the eCAP amplitude growth function (AGF) slope to changes in the interphase gap (IPG) of biphasic electrical pulses (i.e., the IPGEslope). Speech perception was tested using AzBio sentences in both quiet and a ten-talker babble background noise at +5 dB and +10 dB signal-to-noise ratios (SNRs). IPGEslope and PLV values were averaged across electrodes for each subject, both with and without weighting by the frequency importance function (FIF) of the AzBio sentences. Pearson and Spearman correlations were used to assess the pairwise relationships between the IPGEslope, the PLV, and age. Multiple linear regression models with AzBio score as the outcome and the PLV and the IPGEslope as predictors were used to evaluate the associations between the three variables while controlling for age. Results: The IPGEslope and the PLV demonstrated different patterns with regard to their relationships with electrode location, age, and speech perception. The PLV, but not the IPGEslope, differed significantly across electrodes, where the apical electrodes had larger PLVs (better neural synchrony) than the basal electrodes. The IPGEslope, but not the PLV, was significantly correlated with participants' age, where smaller IPGEslope values (poorer spiral ganglion neuron density) were associated with more advanced age. The PLV, but not the IPGEslope, was significantly associated with AzBio scores in the +5 dB SNR condition, where larger PLVs predicted better speech perception. Neither the PLV nor the IPGEslope was significantly associated with AzBio score in quiet or in the +10 dB SNR condition. The result patterns remained the same regardless of whether the mean values of the IPGEslope and the PLV were weighted by the AzBio FIF, and generally did not change with fitting methods or input/output scales of the AGF slopes. Conclusions: The IPGEslope and the PLV quantify different aspects of CN health. The positive association between the PLV and AzBio scores in the +5 dB SNR condition suggests that neural synchrony is important for speech perception in adult CI users in challenging listening conditions with a relatively high noise level. The lack of association between age and the PLV indicates that reduced neural synchrony in the CN is unlikely to be the primary factor accounting for the greater deficits in understanding speech in noise observed in older CI users compared to middle-aged CI users.

4
Neural sensitivity in cochlear implantees determined by electrically-evoked compound action potentials (ECAP) and focused perceptual thresholds

Wohlbauer, D. M.; Arenberg, J. G.

2025-11-06 otolaryngology 10.1101/2025.10.30.25338964 medRxiv
Top 0.1%
39.9%

Purpose: The sensitivity of auditory neurons in response to electric stimulation in cochlear implant (CI) listeners may reflect neuronal health, a major contributor to variability in performance outcomes among CI listeners. In the current study, we explore the interplay of three outcome measures of neural sensitivity: perceptual focused thresholds, objective electrically-evoked compound action potentials (ECAP), and the Failure Index (FI). We further explore the influence of CI experience and how the measures may contribute to speech perception performance. Methods: We examined focused perceptual threshold measures, ECAP stimulation levels and N1P2 peak response amplitudes, and ECAP input-to-output relationships in 29 adult CI recipients (14 females, 11 males, four of whom were bilaterally implanted). Pearson correlation analysis was performed to investigate subject-specific relationships across CI electrodes, and linear mixed-effects models (LMM) were used to identify links between the outcome measures while accounting for individual variation. Results: Individual outcomes revealed large within- and across-subject variability for focused thresholds, ECAP peak amplitudes, and FI. The LMMs showed that low focused threshold measurements correspond to large ECAP peak amplitudes, while high ECAP stimulation levels reflect large ECAP peak amplitudes. Furthermore, clinical speech perception seems to be influenced by the relationship between focused thresholds and FI, with lower performance for stronger associations. Conclusion: Our findings suggest that the combination of objective ECAP response measures and perceptual measures could be a robust estimate of neural health and may serve as an early estimate of speech performance abilities in CI listeners.

5
The father's singing voice may impact premature infants' brains more than their mother's: A study protocol and preliminary data on a singing and EEG randomized controlled trial (RCT) based on the fundamental frequency of voice and kinship parameters

Papatzikis, E.; Dimitropoulos, K.; Tataropoulou, K.; Kyrtsoudi, M.; Pasoudi, E.; O'Toole, J. M.; Nika, A.

2024-09-30 pediatrics 10.1101/2024.09.29.24314570 medRxiv
Top 0.1%
34.4%

This article presents the study protocol for a randomized controlled trial (RCT) investigating the impact of singing on the brain activity of premature infants in the Neonatal Intensive Care Unit (NICU). The study focuses on how the differentiation of voices, as defined by the fundamental frequency (F0) shaped by biological sex and kinship, influences neurophysiological responses when measured by electroencephalography (EEG). Premature infants, who are highly sensitive to auditory stimuli, may benefit from music-based interventions; however, there is limited understanding of how voice variations between male and female caregivers, and whether they are biologically related, affect brain activity. Our protocol outlines a structured intervention where infants are exposed to singing by four facilitators - a male music therapist, a female music therapist, the mother, and the father - and includes two singing stages: a sustained note (A at 440 Hz) and a 90-second lullaby, both interspersed with silent periods to allow for baseline measurements. EEG recordings track brain activity throughout these sessions, followed by quantitative EEG (qEEG) analysis and thorough statistical computations (e.g., mixed-effects models, spectral power analysis, and post-hoc tests) to explore how these auditory stimuli influence brain function. Preliminary data from five infants show that maternal singing elicits the highest delta spectral power in all measured conditions except during the lullaby song, where paternal singing elicits the highest effects, followed by the male music therapist and then the mother. These early findings highlight the potential influence of parental voices, particularly the father's voice, on neonatal brain development, while the detailed study protocol ensures rigor and replicability, providing a robust framework for future research.
Additionally, this protocol lays the groundwork for exploring the long-term effects of music-based interventions, with the goal of improving neurodevelopmental outcomes in premature infants through tailored auditory stimulation. (clinicaltrials.gov unique identifier: NCT06398912)

6
Auditory grouping ability predicts speech-in-noise performance in cochlear implants

Choi, I.; Gander, P. E.; Berger, J. I.; Hong, J.; Colby, S.; McMurray, B.; Griffiths, T. D.

2022-05-31 otolaryngology 10.1101/2022.05.30.22275790 medRxiv
Top 0.1%
33.8%

Objectives: Cochlear implant (CI) users exhibit large variance in understanding speech in noise (SiN). Past work in CI users found that spectral and temporal resolution correlate with SiN ability, but a large portion of variance has remained unexplained. Our group's recent work on normal-hearing listeners showed that the ability to group temporally coherent tones in a complex auditory scene predicts SiN ability, highlighting a central mechanism of auditory scene analysis that contributes to SiN. The current study examined whether auditory grouping ability contributes to SiN understanding in CI users as well. Design: 47 post-lingually deafened CI users performed multiple tasks, including sentence-in-noise understanding, spectral ripple discrimination, temporal modulation detection, and a stochastic figure-ground task in which listeners detect temporally coherent tone pips in a cloud of many tone pips that rise at random times and random frequencies. Accuracies from the latter three tasks were used as predictor variables, while sentence-in-noise performance was used as the dependent variable in a multiple linear regression analysis. Results: No collinearity was found between any predictor variables. All three predictors contributed significantly to the multiple linear regression model, indicating that the ability to detect temporal coherence in a complex auditory scene explains a further amount of variance in CI users' SiN performance that was not explained by spectral and temporal resolution. Conclusions: This result indicates that across-frequency comparison is an important auditory cognitive mechanism in CI users' SiN understanding. Clinically, this result proposes a novel paradigm to reveal a source of SiN difficulty in CI users and a potential rehabilitative strategy.

7
The parallel auditory brainstem response (pABR) paradigm provides accurate and fast hearing thresholds in a clinic-like setting

Polonenko, M. J.; Maddox, R. K.

2025-11-30 otolaryngology 10.1101/2025.11.26.25341073 medRxiv
Top 0.1%
33.5%

Objectives: The auditory brainstem response (ABR) is an essential tool in screening for and diagnosing infant hearing loss, and its results drive decisions regarding interventions and hearing habilitation, with impacts extending far into a child's future. Despite the traditional ABR exam's usefulness, there is an identified need to develop faster, more informative exams. The parallel ABR (pABR) measures responses to all frequencies of interest in both ears at once, rather than the traditional series of single-frequency measurements in one ear at a time, greatly speeding the diagnostic exam. The pABR has been shown to be effective at quickly measuring frequency-specific responses in adults with normal hearing, but it has not yet been tested in people with hearing loss. The goal of this study was to determine the accuracy and speed of the pABR for estimating hearing thresholds in a clinic-like setting. Design: Seventy adults with widely varying sensorineural hearing loss configurations were recruited. We measured thresholds at octave frequencies in two ways: with the behavioral audiogram, serving as the ground truth, and with the pABR using a custom-designed interactive user interface. Accuracy was determined through threshold correlation coefficients as well as absolute error in decibels. Acquisition time was assessed as the time from measurement start to determination of the final threshold. To determine the pABR's speed advantages, a subset of participants was invited back and their thresholds estimated a third time, using a commercially available clinical system to serially measure ABR waveforms. Speedup was assessed in terms of the raw difference in acquisition time in minutes and as the ratio between measurement times made with the two ABR paradigms. Results: Thresholds estimated with the pABR correlated highly with the behavioral audiogram ground truth; the correlation was 0.90 (95% confidence interval 0.88-0.92) across all ears and frequencies. 79% of pABR thresholds were within one 10-dB step size of the behavioral threshold. The pABR was faster in all ten participants in whom traditional serial ABR was also recorded, with a mean recording time of 28 minutes to estimate ten pABR thresholds (500-8000 Hz in each ear) versus 70 minutes to estimate eight serial thresholds (500-4000 Hz in each ear), a mean reduction of 42 minutes. The median speedup ratio was 2.5x. Conclusions: The pABR provides accurate threshold estimates with greatly reduced measurement time compared to traditional methods. Given these results and other advantages related to its design, the pABR holds promise as a clinical tool that can be deployed to commercial systems in the near future.

8
Comparing Phoneme and Word Recognition Test Outcomes in Adult CI users: Data Analysis from the AuDieT Study

Migliorini, E.; Wasmann, J.-W. A.; Philpott, N.; van Dijk, B.; Philips, B.; Huinck, W.

2024-03-29 otolaryngology 10.1101/2024.03.28.24304843 medRxiv
Top 0.1%
30.6%

Purpose: Current clinical measures used in cochlear implantation (CI) provide a broad view of speech recognition ability at word level, often missing granular details contained at phoneme level that may be valuable for CI mapping. This study evaluates how outcomes of Phoneme Recognition in Quiet tests (PRQ) differ from those of more commonly used word recognition tests (CVC) and outlines how these tests may be useful for different purposes in clinical adult CI care. Methods: As part of the AuDiET (Auditory Diagnostics and Error-based Treatment) study, 23 adult postlingually deafened unilateral CI users underwent a battery of tests, including both PRQ and CVC tests. Their results were compared at the phoneme level, including an evaluation of fitness and error dispersion. Results: PRQ had significantly lower accuracy and fitness than CVC. The error patterns also tended to be less random and more systematic. Fitness correlated strongly and positively with accuracy, while error dispersion correlated negatively with accuracy. Conclusion: There are clear differences between PRQ and CVC outcomes in absolute accuracy and error distribution. Comparing these tests might provide clinicians with more granular insights into which areas/phonemes to target during mapping, to achieve optimal speech recognition.

9
An initial validation study of DigiBel, a web-application enabling self-assessment of air and bone-conduction audiometry in the community

Sienko, A.; Thirunavukarasu, A. J.; Kuzmich, T.; Allen, L. E.

2023-04-19 otolaryngology 10.1101/2023.04.11.23288179 medRxiv
Top 0.1%
29.8%

80% of primary school children suffer from glue ear, which may impair hearing at a critical time for speech acquisition and social development. An online application, DigiBel, has been developed primarily to identify individuals with conductive hearing impairment who may benefit from temporary use of bone-conduction (BC) assistive technology in the community. This preliminary study aims to determine the screening accuracy and usability of DigiBel self-assessed air-conduction (AC) pure tone audiometry (PTA) in adult volunteers with simulated hearing impairment prior to formal clinical validation. Healthy adults, each with one ear plugged, underwent standard automated AC PTA (reference test) and DigiBel audiometry in quiet community settings. Threshold measurements were compared across six tone frequencies and DigiBel test-retest reliability was calculated. The accuracy of DigiBel for detecting more than 20 decibels of hearing impairment was assessed. 30 adults (30 unplugged ears and 30 plugged ears) completed both audiometry tests. DigiBel had 100% sensitivity (95% CI 87.23-100) and 72.73% specificity (95% CI 54.48-86.70) in detecting hearing impairment. Threshold mean bias was insignificant except at 4000 and 8000 Hz, where a small but significant over-estimation of threshold measurement was identified. All 24 subjects completing feedback rated the DigiBel test good or excellent, and 21 (87.5%) agreed or strongly agreed that they would be able to do the test at home without help. This study supports the potential use of DigiBel as a screening tool for hearing impairment. The findings will be used to improve the software further and to undertake a formal clinical trial of AC and BC audiometry in individuals with suspected conductive hearing impairment. Author Summary: Hearing loss is a major global health issue. It can affect many aspects of life, such as education, employment, and communication, and can result in social isolation. Two thirds of people with severe hearing loss live in low- and middle-income countries with poor access to hearing testing (audiometry) and to conventional hearing aids. Several software applications (apps) like DigiBel, studied here, have been developed to enable individuals to test their own hearing in the community. Uniquely, DigiBel has the additional potential to identify individuals with hearing loss who could derive immediate hearing support from an affordable and rechargeable bone-conduction headphone/microphone kit while awaiting specialist care. This initial study of DigiBel provides confirmation that the app is easy to use and accurate at detecting simulated hearing impairment. It lays the groundwork for future clinical studies to assess DigiBel's performance in children and adults with hearing impairment.

10
Performance Analysis of Speech Recognition Models in Automated Scoring of the QuickSIN Test

Hassanpour, A.; Jiang, Y.; Folkeard, P.; Macpherson, E.; Scollie, S. D.; Parsa, V.

2025-07-25 otolaryngology 10.1101/2025.07.25.25332211 medRxiv
Top 0.1%
28.8%

Purpose: Best practices in audiology recommend assessing speech understanding in noisy environments, especially for those with communication difficulties. Speech-in-noise (SiN) assessments such as the QuickSIN are used for validating signal processing in hearing aids (HAs) and are linked to HA satisfaction. This project seeks to enhance QuickSIN test efficiency by applying recent advancements in automatic speech recognition (ASR) technologies. Method: Twenty-three adults with sensorineural hearing loss were fitted bilaterally with Unitron Moxi HAs and were administered the QuickSIN test in low- and high-reverberation environments. Testing was performed with two different HA programs: an omnidirectional program and a fixed directional microphone program. QuickSIN sentences were presented from 0° azimuth and competing babble from either 0°, laterally from 90° or 270°, or simultaneously from 90°, 180°, and 270° azimuths. Participants' verbal responses to QuickSIN stimuli were scored by an audiologist and were recorded in parallel for offline transcription and scoring by ASR models from Amazon, Microsoft, NVIDIA, and Picovoice. The ASR-derived QuickSIN scores were compared to the corresponding audiologist-derived scores. Results: Repeated-measures ANOVA results revealed that all ASR models overestimated the QuickSIN scores across most test conditions. Bland-Altman analyses showed that the Amazon ASR model had the least bias and the narrowest limits of agreement in comparison to manual scoring by an experienced audiologist. Conclusions: Some ASR models, such as Amazon's, demonstrated performance comparable to that of an audiologist in automatically scoring QuickSIN tests. However, further refinements are necessary to increase the robustness of the ASR models in scoring low-SNR-loss test conditions.
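The Bland-Altman analysis used above reduces to two quantities: the mean difference between the two scoring methods (bias) and the bias ± 1.96 standard deviations (95% limits of agreement). A minimal sketch; the score values below are made-up illustrative numbers, not the study's data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two scoring methods."""
    diffs = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = float(diffs.mean())
    sd = float(diffs.std(ddof=1))  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical ASR-derived vs audiologist-derived QuickSIN SNR-loss scores
asr = [2.5, 4.0, 6.5, 1.0, 3.5]
manual = [2.0, 3.5, 5.5, 1.0, 3.0]
bias, (loa_low, loa_high) = bland_altman(asr, manual)
```

A bias near zero with narrow limits of agreement, as reported for the Amazon model, indicates close agreement with the audiologist's scoring.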

11
From Spectral Resolution to Speech Perception: A Review of Findings in Postlingually Deafened Adult Cochlear Implant Listeners

Ashjaei, S.; Farrar, R.; Paxton, M.; Morgan, K.; Arjmandi, M.

2025-04-29 otolaryngology 10.1101/2025.04.28.25326599 medRxiv
Top 0.1%
28.0%

Reduced spectral resolution limits speech recognition in cochlear implant (CI) listeners. While several studies have examined the association between spectral resolution and speech perception, uncertainties persist regarding the strength of this link and related methodological and clinical factors. This review synthesizes prior findings on this relationship in postlingually deafened adult CI listeners using psychophysical measures of spectral resolution, evaluated against four criteria: (1) whether they consider the categorization nature of the speech recognition task, (2) whether they account for modulation frequency within and across spectral channels while capturing essential spectral modulations in speech, (3) whether they assess spectral resolution globally or at a channel-specific level, and (4) their relative time efficiency for clinical application. Many studies report a significant association, with some measures, such as the spectral ripple discrimination threshold test and its modified versions, demonstrating superior predictive capacity, yet no single measure meets all criteria. Our review highlights the critical role of methodological factors and calls for more refined, effective, and channel-specific assessment techniques that better capture the link between spectral resolution and speech perception in CI listeners. These approaches may improve clinical CI programming by addressing poorly functioning electrodes and enhancing perceptual outcomes in postlingually deafened adult CI listeners.

12
Use of Envelope Following Response Normative Ranges for Diagnosing Cochlear Deafferentation

Heassler, A. E.; McMillan, G. P.; Kampel, S. D.; Whittle, N. K.; Szabo, H. A.; Verhulst, S.; Buran, B. N.; Bramhall, N.

2025-10-27 otolaryngology 10.1101/2025.10.24.25338742 medRxiv
Top 0.1%
27.2%

Purpose: The lack of a means of diagnosing cochlear synaptopathy, a type of cochlear deafferentation, prevents clinicians from identifying patients with this auditory deficit and providing them with appropriate treatments. The envelope following response (EFR) has potential as a diagnostic indicator of deafferentation. However, it is not clear what constitutes an abnormal EFR. The objectives of this study were to establish normative ranges for EFR magnitude in a population at low risk for cochlear synaptopathy and then compare EFRs from a population at high risk for synaptopathy to those normative ranges. Methods: The low-risk sample consisted of young adults with normal audiograms, minimal reported lifetime noise exposure, and no auditory complaints. Normative ranges were generated using rectangular amplitude-modulated (RAM) or sinusoidal amplitude-modulated (SAM) EFR stimuli and were adjusted for sex and distortion product otoacoustic emission (DPOAE) levels. The high-risk sample consisted of military Veterans with normal audiograms who reported at least one auditory complaint (tinnitus, decreased sound tolerance, or speech-in-noise difficulty). Results: The SAM EFR normative ranges for a 4 kHz carrier produced the largest separation of the low- and high-risk samples, with 36% of Veterans falling below the lower bound of the normative range. There were no consistent effects of DPOAE adjustment on the normative ranges across sex and stimulus condition, and computational modeling suggests that adjusting for DPOAEs may not be necessary in individuals with normal audiograms. Conclusion: EFR normative ranges for the 4 kHz SAM EFR will allow for clinical identification of patients with normal audiograms who have significant degrees of cochlear deafferentation.

13
Extending the audiogram with loudness growth: revealing complementarity in bimodal aiding

Lambriks, L.; Van Hoof, M.; George, E.; Devocht, E.

2022-10-27 otolaryngology 10.1101/2022.10.24.22281443 medRxiv
Top 0.1%
26.5%

Introduction: Clinically, the audiogram is the most commonly used measure when evaluating hearing loss and fitting hearing aids. As an extension, we present the loudness audiogram, which shows not only auditory thresholds but also visualises the full course of loudness perception. Methods: In a group of 15 bimodal users, loudness growth was measured with the cochlear implant and hearing aid separately using a loudness scaling procedure. Loudness growth curves were constructed, using a novel loudness function, for each modality and then integrated in a graph plotting frequency, stimulus intensity level, and loudness perception. Bimodal benefit, defined as the difference between wearing a cochlear implant and hearing aid together versus wearing only a cochlear implant, was assessed for multiple speech outcomes. Results: Loudness growth was related to bimodal benefit for speech understanding in noise and to some aspects of speech quality. No correlations between loudness and speech in quiet were found. Patients who had predominantly unequal loudness input from the hearing aid gained more bimodal benefit for speech understanding in noise than patients whose hearing aid provided mainly equivalent input. Discussion: Fitting the cochlear implant and a contralateral hearing aid to create equal loudness at all frequencies may not always be beneficial for speech understanding.

14
Covarying Amplitude Modulation and Pulse Rate Enhances Pitch Discrimination in Cochlear Implant Users

Li, H.; Zhou, H.; Pang, L.; Li, J.; Wei, C.; Wu, P.; Meng, Q.; Zeng, X.

2025-12-04 otolaryngology 10.64898/2025.12.03.25341217 medRxiv
Top 0.1%
25.7%

Pitch plays a fundamental role in prosody, lexical tone, and music perception, yet cochlear implant (CI) users exhibit limited temporal pitch sensitivity, particularly at higher pulse rates (e.g., 300 Hz). This study proposes a covarying pitch-encoding method, in which the amplitude-modulation (AM) frequency and the pulse rate vary together in predetermined integer ratios to reinforce temporal periodicity cues. A single-channel psychophysical pitch discrimination experiment was conducted at the most apical electrode in 17 ears from 14 Cochlear CI users around reference frequencies of 50 and 300 Hz. The aims were to examine the effects of the integer ratio on pitch discrimination using the covarying method and to compare the performance of the covarying method with two conventional pitch-encoding methods: pulse rate only and AM only. An additional consonant recognition task evaluated speech recognition ability. Results showed that at 50 Hz neither integer ratio nor pitch-encoding method significantly affected pitch discrimination thresholds (≈30%). At 300 Hz, thresholds were overall higher than at 50 Hz, but the covarying method produced lower thresholds (41.8%) than the pulse rate (52.1%) and AM frequency methods (52.4%), and the covarying-pulse-rate difference was statistically significant. These results suggest that covarying stimulation can modestly enhance pitch discrimination at higher frequencies relative to conventional methods. Regression analyses revealed that temporal pitch discrimination in this psychophysical task at 50 Hz deteriorated with longer CI experience, whereas consonant recognition in a more ecologically relevant speech task improved, suggesting distinct neural adaptation mechanisms.
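The covarying idea above (pulse rate locked to an integer multiple of the AM frequency, so both cues share one periodicity) can be sketched as simple signal construction. The sampling rate, raised-cosine modulator, and all parameter values are illustrative assumptions for the sketch, not the study's stimulation parameters:

```python
import numpy as np

def covarying_pulse_train(am_freq, ratio, fs=16000, dur=0.5):
    """Unit-amplitude pulse train at rate = am_freq * ratio,
    amplitude-modulated at am_freq, so that the pulse-rate and AM
    periodicity cues covary instead of being set independently."""
    rate = am_freq * ratio            # pulse rate as integer multiple of AM
    n = int(fs * dur)
    t = np.arange(n) / fs
    train = np.zeros(n)
    pulse_idx = (np.arange(int(dur * rate)) * fs / rate).astype(int)
    train[pulse_idx] = 1.0
    am = 0.5 * (1.0 - np.cos(2.0 * np.pi * am_freq * t))  # raised-cosine AM
    return train * am

sig = covarying_pulse_train(am_freq=300, ratio=2)  # 600 pps carrying 300 Hz AM
```

Shifting the reference frequency then moves the AM frequency and the pulse rate together, keeping their ratio fixed, which is what distinguishes this encoding from the pulse-rate-only and AM-only methods it was compared against.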

15
A standardised test to evaluate audio-visual speech intelligibility in French

Le Rhun, L.; Llorach, G.; Delmas, T.; Suied, C.; Arnal, L.; Lazard, D.

2023-01-18 otolaryngology 10.1101/2023.01.18.23284110 medRxiv
Top 0.1%
24.0%

Objective: Lipreading, which plays a major role in the communication of the hearing impaired, lacked a French standardised tool. Our aim was to create and validate an audio-visual (AV) version of the French Matrix Sentence Test (FrMST). Design: Video recordings were created by dubbing the existing audio files. Sample: Thirty-five young, normal-hearing participants were tested in auditory and visual modalities alone (Ao, Vo) and in AV conditions, in quiet, in noise, and in open- and closed-set response formats. Results: Lipreading ability (Vo) varied from 1% to 77% word comprehension. The absolute AV benefit was 9.25 dB SPL in quiet and 4.6 dB SNR in noise. The response format did not influence the results in the AV noise condition, except during the training phase. Lipreading ability and AV benefit were significantly correlated. Conclusions: The French video material achieved AV benefits similar to those described in the literature for AV MSTs in other languages. For clinical purposes, we suggest targeting SRT80 to avoid ceiling effects, and performing two training lists in the AV condition in noise, followed by one AV list in noise, one Ao list in noise and one Vo list, in randomised order, in open- or closed-set format.

16
Cochlear Place Specificity of the Auditory Brainstem Response to Narrowband Chirp versus 2-1-2 stimuli: High-Pass Noise/Derived Responses

Adjekum, R. N.; Stapells, D. R.

2025-11-17 otolaryngology 10.1101/2025.11.16.25340312 medRxiv
Top 0.1%
23.8%

Objective: In recent years, many researchers have recommended using narrowband chirp (NBchirp) stimuli for Auditory Brainstem Response (ABR) audiometry instead of more-standard 2-1-2-cycle linear-gated tones, primarily because NBchirps often result in larger ABR wave V amplitudes. However, the acoustic frequency spectra of currently recommended NBchirps are wider than those of 2-1-2 tones, and it is not currently known whether ABRs to these NBchirps have similar (or poorer) cochlear place specificity compared with 2-1-2 tones. The current study used the high-pass noise/derived-response technique to assess the cochlear regions contributing to ABRs evoked by NBchirp versus 2-1-2 stimuli. Design: A total of 24 adults with normal hearing participated (N=12 for each stimulus frequency). Stimuli were 60-dB peSPL 500- and 2000-Hz NBchirps and 2-1-2 tones mixed with high-pass (HP) filtered masking noise. The level of broadband (pink) noise required to mask the ABR was determined individually, then the broadband noise at this level was HP filtered at ½-octave intervals. Three ABR replications were obtained for each condition, with recordings stopped when the residual noise level of each replication was reduced to 40 nanovolts. Derived responses (DRs) representing 1-octave-wide or ½-octave-wide cochlear regions were calculated by subtracting ABRs recorded in HP noise. Results: Non-masked ABR amplitudes in response to NBchirps were significantly larger than those to 2-1-2 stimuli, averaging 55% larger for 500 Hz and 81% larger for 2000 Hz. For both 500- and 2000-Hz stimuli, HP noise masking produced significant amplitude decreases, occurring 1 to ½ octave higher for ABRs to NBchirps versus 2-1-2 tones. One-octave-wide and ½-octave-wide DR amplitude profiles for the ABRs to 2-1-2 tones showed good cochlear place specificity, as described in previous studies. DR results for the NBchirps were similar but showed important differences. The profiles for the 2000-Hz NBchirps showed significantly larger amplitudes in the 4- and 1-kHz DRs compared with the 2-1-2 stimuli. Many more responses were seen one octave away for the 2000-Hz NBchirp than for the 2-1-2 tone. DR results for 500-Hz tones showed similar patterns, but differences did not quite reach statistical significance, except that amplitudes to NBchirps were larger at DR354, DR500 and DR707. A measure of the width of the 1-octave-wide and ½-octave-wide DR amplitude profiles (BW0.075, in Hz) showed the 500- and 2000-Hz NBchirp profiles were significantly wider (32% to 77%) than those for 2-1-2 stimuli. As the cochlear area able to respond decreased, wave V amplitudes to NBchirp stimuli decreased more than those to 2-1-2 stimuli, with no difference between stimuli for ½-octave-wide responses. Conclusion: ABRs to narrowband chirps reflect wider cochlear contributions than those to 2-1-2 tones. Responses to NBchirps arise from cochlear regions as far as one octave away from the stimulus frequency. In contrast, responses to 2-1-2 tones arise from cochlear regions primarily within approximately ±0.5 octaves of the stimulus frequency. Further research in individuals with hearing loss is required to determine whether the wider bandwidths of NBchirps result in threshold mis-estimations, and whether the NBchirp amplitude advantage over more-standard stimuli remains with hearing loss.
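The derived-response arithmetic described above (subtracting the ABR recorded under a lower high-pass masker cutoff from the ABR at the next higher cutoff) can be sketched as follows; the dictionary layout and the cutoff values in the example are illustrative assumptions, not the study's data.

```python
import numpy as np

def derived_responses(abr_by_cutoff):
    """High-pass noise / derived-response technique (sketch).
    `abr_by_cutoff` maps each high-pass masker cutoff (Hz) to the ABR
    waveform recorded with that masker (equal-length 1-D arrays). With a
    cutoff at f, cochlear regions above f are masked, so subtracting the
    lower-cutoff recording from the next higher one isolates the band
    between the two cutoffs."""
    cutoffs = sorted(abr_by_cutoff)
    return {
        (lo, hi): abr_by_cutoff[hi] - abr_by_cutoff[lo]
        for lo, hi in zip(cutoffs, cutoffs[1:])
    }
```

With ½-octave cutoff spacing this yields the ½-octave-wide DRs; 1-octave spacing yields the 1-octave-wide DRs.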

17
A frequency peak at 3.1 kHz obtained from the spectral analysis of the cochlear implant electrocochleography noise

Herrada, J.; Medel, V.; Dragicevic, C.; Maass, J. C.; Stott, C. E.; Delano, P. H.

2023-09-10 neuroscience 10.1101/2023.09.09.556985 medRxiv
Top 0.1%
22.9%

Introduction: The functional evaluation of auditory-nerve activity in spontaneous conditions has remained elusive in humans. In animals, frequency analysis of the round-window electrical noise recorded by means of electrocochleography yields a frequency peak at around 900 to 1000 Hz, which has been proposed to reflect auditory-nerve spontaneous activity. Here, we studied the spectral components of the electrical noise obtained from cochlear implant electrocochleography in humans. Methods: We recruited adult cochlear implant recipients from the Clinical Hospital of the Universidad de Chile between 2021 and 2022. We used the AIM System from Advanced Bionics® to obtain single-trial electrocochleography signals from the most apical electrode in cochlear implant users. We performed a protocol to study spontaneous activity and auditory responses to 0.5 and 2 kHz tones. Results: Twenty subjects, including 12 females, with a mean age of 57.9 ± 12.6 years (range 36 to 78 years), were recruited. The electrical noise of the single-trial cochlear implant electrocochleography signal yielded a reliable peak at 3.1 kHz in 55% of cases (11 out of 20 subjects), while an oscillatory pattern that masked the spectrum was observed in seven cases. In the other two cases, the single-trial noise was not classifiable. Auditory stimulation at 0.5 kHz and 2.0 kHz did not change the amplitude of the 3.1 kHz frequency peak. Conclusion: We found two main types of noise patterns in the frequency analysis of the single-trial noise from cochlear implant electrocochleography, including a peak at 3.1 kHz that might reflect auditory-nerve spontaneous activity, while the oscillatory pattern probably corresponds to an artifact.
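A minimal sketch of this kind of spectral analysis (averaged windowed-segment spectra of the single-trial noise, with a peak search near 3.1 kHz) might look like the following; the segment length, window, and band edges are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

def noise_spectrum_peak(signal, fs, band=(2500.0, 3700.0)):
    """Locate the dominant spectral peak of a noise recording inside a
    search band (here, around 3.1 kHz). Averages power spectra of
    Hann-windowed segments, then returns the frequency of the largest
    bin within the band. All parameters are illustrative."""
    seg_len = 1024
    window = np.hanning(seg_len)
    n_segs = len(signal) // seg_len
    psd = np.zeros(seg_len // 2 + 1)
    for i in range(n_segs):
        seg = signal[i * seg_len:(i + 1) * seg_len]
        psd += np.abs(np.fft.rfft(seg * window)) ** 2
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[in_band][np.argmax(psd[in_band])])
```

Restricting the argmax to a band is what distinguishes "a reliable peak at 3.1 kHz" from broadband or oscillatory spectra, which would dominate an unrestricted search.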

18
It takes experience to tango: Experienced cochlear implant users show cortical evoked potentials to naturalistic music

Haumann, N. T.; Petersen, B.; Seeberg, A. B.; Vuust, P.; Brattico, E.

2025-06-08 neuroscience 10.1101/2025.06.04.657805 medRxiv
Top 0.1%
22.7%

Approximately 30% of cochlear implant (CI) users report that restoring their ability to enjoy music is a primary goal. However, music perception in CI users has mostly been investigated in controlled laboratory settings using simplified stimuli, such as pure tones or monophonic melodies. There is an increasing interest in developing objective measures of CI outcomes in everyday listening situations, particularly in music listening, which involves complex stimuli rich in timbre, pitch, rhythm, and overlapping sounds. One promising approach is to measure cortical auditory evoked responses (ERs) in CI users. We investigated whether ERs to sound onsets in a naturalistic four-minute music piece could be measured in adult CI users (N: 25; ages 18-80; CI experience: 0.3-14 years). We assumed that the accumulation of CI experience might be reflected in the morphology of the ERs. The results confirmed that P2 responses to sound onsets embedded in a whole piece of music can be detected in experienced CI users. Compared to a control group with normal hearing, the CI users showed P2 responses with lower amplitudes and longer latencies. Exploratory linear regression models suggested that the logarithmic duration of CI experience significantly predicted both perceived quality of musical sounds and P2 amplitude, explaining 38% and 28% of the variance, respectively. The findings suggest that music perception outcomes may continue to improve for up to 2-4 years post-implantation. Altogether, the results are consistent with the use of ERs to track CI adaptation to music listening.

19
The Search for a Diagnostic Indicator of Cochlear Deafferentation: Predicting Age and Veteran Status from Auditory Evoked Potential Measures

Buran, B. N.; Thienpont, M.; Kampel, S. D.; Heassler, A. E.; Whittle, N. K.; Szabo, H. A.; Verhulst, S.; Bramhall, N. F.

2025-11-21 otolaryngology 10.1101/2025.11.20.25340672 medRxiv
Top 0.1%
20.4%

Objectives: Cochlear synaptopathy, a type of cochlear deafferentation that occurs with aging and following loud noise exposure, is expected to be common in humans and to have negative impacts on auditory perception. However, there is currently no means of diagnosing cochlear deafferentation in living humans. Auditory brainstem response (ABR) wave I amplitude and the envelope following response (EFR) are auditory evoked potentials that have been proposed as potential non-invasive indicators of cochlear deafferentation. However, these measures may be impacted by outer hair cell (OHC) dysfunction, making them difficult to interpret. One potential method for estimating the degree of deafferentation in individual patients is to combine evoked potential and distortion product otoacoustic emission (DPOAE) measurements with a computational model of the auditory periphery (CMAP). The goal of this study was to evaluate the ability of auditory evoked potentials, with and without the CMAP, to predict risk factors for cochlear synaptopathy (age and history of military noise exposure). Design: In a population of military Veterans and non-Veterans with up to a mild sensorineural hearing loss, a CMAP was used with Bayesian regression to predict synapse numbers across cochlear frequency (synaptograms) for individual human participants based on their ABR, EFR, and/or DPOAE measurements. Linear regression models were then used to evaluate the ability of the synaptograms and various ABR wave I amplitude, EFR magnitude, and DPOAE measurements to predict age and Veteran status. All Veterans were assumed to have at least some history of military noise exposure. Results: High-frequency (4 and 5.6 kHz) ABR wave I amplitude measurements, and synaptograms generated from high-frequency ABR wave I amplitudes, performed best at predicting participant age. Accounting for OHC function (as indicated by DPOAEs) in the generation of the synaptograms, or by including DPOAEs in the linear regression models, had limited impact on the ability of ABR wave I amplitudes to predict age. DPOAEs were highly predictive of Veteran status, making it difficult to isolate the ability of the auditory evoked potentials to predict Veteran status. Conclusions: High-frequency ABR wave I amplitudes, and synaptograms generated from them, were able to predict participant age to within approximately 6 years, with or without incorporating DPOAE measurements. This suggests that high-frequency ABR wave I amplitude measurements are good candidates for non-invasive diagnosis of age-related cochlear deafferentation, and that it may not be necessary to use the CMAP or measure DPOAEs to predict deafferentation in individual patients. Unfortunately, specific recommendations for predicting noise-induced cochlear deafferentation could not be ascertained from this study due to confounding related to OHC dysfunction.
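The final regression step (predicting age from an evoked-potential measure) can be reduced to a toy single-predictor version; the study's actual models included synaptograms and DPOAE covariates fit via Bayesian methods, so the ordinary least-squares sketch below only illustrates the idea.

```python
import numpy as np

def fit_age_predictor(wave1_amps, ages):
    """Toy age-prediction step: ordinary least squares of age on ABR
    wave I amplitude, returning the fit and its mean absolute error in
    years. Single-predictor OLS is a stand-in for the study's richer
    models."""
    slope, intercept = np.polyfit(wave1_amps, ages, 1)
    predicted = slope * np.asarray(wave1_amps) + intercept
    mae = float(np.mean(np.abs(predicted - np.asarray(ages))))
    return slope, intercept, mae
```

The reported "within approximately 6 years" figure corresponds to an error metric of this kind evaluated on the study's own predictors.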

20
Normative ranges for auditory brainstem response wave I amplitude: A potential diagnostic indicator of cochlear deafferentation

Kampel, S. D.; McMillan, G. P.; Heassler, A. E.; Whittle, N. K.; Szabo, H. A.; Bramhall, N. F.

2025-11-15 otolaryngology 10.1101/2025.11.13.25340158 medRxiv
Top 0.1%
20.1%

Purpose: While cochlear synaptopathy has limited impact on auditory thresholds, there is increasing evidence that cochlear deafferentation is associated with auditory perceptual deficits. However, there is currently no means of diagnosing synaptopathy or deafferentation in individual patients. The objectives of this study were to establish normative ranges for auditory brainstem response (ABR) wave I amplitude, a measure sensitive to synaptopathy in animals, in a population at low risk for synaptopathy, and then to compare a population at high risk for synaptopathy to those normative ranges. Methods: The low-risk sample consisted of 169 non-Veteran young adults with normal audiograms, minimal self-reported noise exposure history, and no auditory complaints (tinnitus, decreased sound tolerance, or speech-perception-in-noise difficulty). ABR wave I amplitude normative ranges were generated for 2, 4, and 8 kHz tonebursts and were statistically adjusted for sex and average distortion product otoacoustic emission (DPOAE) levels. Ninety-one military Veterans with normal audiograms and at least one auditory complaint were included in the high-risk comparison sample. Results: While the DPOAE-adjusted ABR normative ranges were effective at distinguishing between the low- and high-risk samples, the results also indicated that adjusting the ABR normative ranges for outer hair cell (OHC) dysfunction may not be necessary and could be problematic. ABR normative ranges adjusted only for sex were able to differentiate between the low- and high-risk samples, with 51% of the high-risk sample falling below the normative ranges for a 105 dB peSPL 8 kHz toneburst. Conclusions: In patients with normal audiograms, sex-specific ABR wave I amplitude normative ranges can be used by clinicians to identify patients with high degrees of cochlear deafferentation.
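As a rough illustration of sex-specific normative ranges, the sketch below takes a low percentile of the low-risk sample within each sex as the lower bound and flags amplitudes that fall below it. The study's ranges were regression-based (optionally DPOAE-adjusted) rather than simple percentiles, and the percentile choice here is arbitrary.

```python
import numpy as np

def normative_lower_bounds(amplitudes, sexes, pct=5.0):
    """Sex-specific lower normative bounds for ABR wave I amplitude:
    a low percentile of the low-risk sample within each sex. A plain
    percentile stands in for the study's statistical adjustment."""
    bounds = {}
    for sex in set(sexes):
        vals = [a for a, s in zip(amplitudes, sexes) if s == sex]
        bounds[sex] = float(np.percentile(vals, pct))
    return bounds

def below_normative(amplitude, sex, bounds):
    # flag a patient whose amplitude falls below the sex-specific bound
    return amplitude < bounds[sex]
```

Classifying high-risk participants against such bounds is what yields figures like "51% of the high-risk sample falling below the normative ranges".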